
# 8k long text processing

## BTLM-3B-8k-base

Apache-2.0 · cerebras · 2,078 downloads · 262 likes

BTLM-3B-8k-base is a 3-billion-parameter language model with an 8k context length, trained on the 627-billion-token SlimPajama dataset. It delivers performance comparable to open-source 7-billion-parameter models.

Tags: Large Language Model · Transformers · English
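
A minimal sketch of loading the model for inference with the Hugging Face transformers library; the model ID `cerebras/btlm-3b-8k-base` matches its Hugging Face release, while the prompt and generation parameters are illustrative assumptions rather than recommended settings.

```python
# Sketch: loading BTLM-3B-8k-base with Hugging Face transformers.
# The prompt and generation settings below are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cerebras/btlm-3b-8k-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# trust_remote_code=True lets transformers load the custom BTLM
# architecture code shipped in the model repository.
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = "Summarize the following report:\n..."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```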
## MPT-30B

Apache-2.0 · mosaicml · 2,021 downloads · 342 likes

MPT-30B is an open-source large language model trained by MosaicML. It is based on a decoder-only Transformer architecture, pre-trained on 1 trillion tokens of English text and code, and supports an 8k context window with efficient inference.

Tags: Large Language Model · Transformers · Other
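
A minimal sketch of loading MPT-30B and feeding it a long input within its 8k-token window; the model ID `mosaicml/mpt-30b` matches MosaicML's Hugging Face release, while the dtype choice and the placeholder document are assumptions for illustration.

```python
# Sketch: loading MPT-30B with Hugging Face transformers and using
# its 8k context window. dtype and input text are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mosaicml/mpt-30b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision: 30B params are ~60 GB in bf16
    trust_remote_code=True,      # the repo ships its own MPT implementation
)

long_document = "..."  # placeholder: up to roughly 8k tokens of text
inputs = tokenizer(long_document, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```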